3 research outputs found

    Quantitative study about the estimated impact of the AI Act

    With the Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (AI Act), the European Union provides the first regulatory document that applies to the entire complex of AI systems. While some fear that the regulation leaves too much room for interpretation and thus brings little benefit to society, others worry that it is too restrictive, blocking progress and innovation and hindering the economic success of companies within the EU. Without a systematic approach, it is difficult to assess how it will actually impact the AI landscape. In this paper, we suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021. We went through several iterations of compiling the list of AI products and projects in and from Germany listed on the Lernende Systeme platform, and then classified them according to the AI Act together with experts from the fields of computer science and law. Our study shows a need for more concrete formulation, since for some provisions it is often unclear whether they are applicable in a specific case or not. Apart from that, it turns out that only about 30% of the AI systems considered would be regulated by the AI Act; the rest would be classified as low-risk. However, as the database is not representative, the results only provide a first assessment. The process presented can be applied to any collection, and also repeated when regulations are about to change. This allows fears of over- or under-regulation to be investigated before the regulation comes into effect.
    Comment: The raw data and the various categorizations (including the preprocessing steps) are submitted as well.
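    The quantitative core of the study described above is a simple tally: each system in the collection receives an AI Act risk classification, and the share of systems that would fall under the regulation is computed from those labels. The sketch below is purely illustrative and uses hypothetical labels and counts, not the study's data or tooling:

```python
# Minimal sketch (not the authors' actual pipeline): tally hypothetical
# AI Act risk classifications and compute the share of systems that
# would be regulated (i.e. anything above the low-risk category).
from collections import Counter

# Hypothetical classifications; in the study, each system from the
# Lernende Systeme collection was labelled together with legal experts.
classifications = [
    "prohibited", "high-risk", "limited-risk", "low-risk", "low-risk",
    "high-risk", "low-risk", "low-risk", "limited-risk", "low-risk",
]

counts = Counter(classifications)
regulated = sum(n for label, n in counts.items() if label != "low-risk")
share = regulated / len(classifications)

print(counts)
print(f"Share of systems regulated by the AI Act: {share:.0%}")
```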

    What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research

    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these stakeholders' desiderata) in a variety of contexts. However, the literature on XAI is vast and spread across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve as common ground for researchers from the variety of disciplines involved in XAI. It emphasizes where there is interdisciplinary potential in the evaluation and development of explainability approaches.
    Comment: 57 pages, 2 figures, 1 table, to be published in Artificial Intelligence. Markus Langer, Daniel Oster, and Timo Speith share first-authorship of this paper.